Search for: All records

Creators/Authors contains: "Morris, Michele"


  1. Genomics has grown exponentially over the last decade. Common variants are associated with physiological changes through statistical strategies such as Genome-Wide Association Studies (GWAS) and quantitative trait loci (QTL) mapping. Rare variants are associated with diseases through extensive filtering tools, including population genomics and trio-based sequencing (parents and probands). However, these genomic associations require follow-up analyses to narrow causal variants, identify the genes that are influenced, and determine the resulting physiological changes. Large quantities of data exist that can connect variants to gene changes, cell types, protein pathways, clinical phenotypes, and animal models, establishing physiological genomics. These data, combined with bioinformatics including evolutionary analysis, structural insights, and gene regulation, can yield testable hypotheses for the mechanisms of genomic variants. Molecular biology, biochemistry, cell culture, CRISPR editing, and animal models can then test those hypotheses to establish molecular variant mechanisms. Variant characterization can also be a significant component of educating future professionals in undergraduate, graduate, or medical training programs, teaching the basic concepts and terminology of genetics while students learn independent research hypothesis design. This article surveys the computational and experimental strategies of variant characterization and provides examples of these tools applied in publications. © 2022 American Physiological Society. Compr Physiol 12:3303-3336, 2022.
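The trio-based filtering strategy mentioned in the abstract above can be sketched in a few lines: rare candidate variants are those below a population allele-frequency cutoff that appear in the proband but in neither parent. This is a minimal illustrative sketch with hypothetical data and an assumed frequency threshold, not the article's actual pipeline.

```python
# Hypothetical trio-based rare-variant filter (illustrative data and
# thresholds only). Each variant tuple holds: an ID, a population
# allele frequency, and ALT-allele counts for proband, mother, father.
variants = [
    ("var1", 0.25, 1, 1, 0),     # common in the population -> excluded
    ("var2", 0.0001, 1, 0, 0),   # rare and absent in parents -> retained
    ("var3", 0.00005, 1, 1, 0),  # rare but inherited from mother -> excluded
]

RARE_AF = 0.001  # assumed rarity cutoff (a gnomAD-style frequency)

def candidate_de_novo(variants, max_af=RARE_AF):
    """Keep variants that are rare in the population and present in the
    proband but absent in both parents (candidate de novo variants)."""
    return [
        vid
        for vid, af, proband, mother, father in variants
        if af < max_af and proband > 0 and mother == 0 and father == 0
    ]

print(candidate_de_novo(variants))  # -> ['var2']
```

Real pipelines layer many more filters (sequencing quality, functional annotation, segregation in affected relatives) on top of this frequency-and-inheritance core.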
  2. In the age of genomics, public understanding of complex scientific knowledge is critical. To combat reductionistic views, it is necessary to generate and organize educational material and data that keep pace with advances in genomics. The view that CCR5 is solely the receptor for HIV gave rise to demands to remove the gene in patients to create host HIV resistance, underestimating the broader roles and complex genetic inheritance of CCR5. A program that provides research projects to undergraduates, known as CODE, has been expanded to rapidly build educational material for genes such as CCR5, exposing students and trainees to large bioinformatics databases and previous experiments, and to data broad enough to challenge a commitment to biological reductionism. Our students organize expression databases, query environmental responses, assess genetic factors, generate protein models and dynamics, and profile evolutionary insights into a protein such as CCR5. The knowledgebase generated in the initiative opens the door to public educational information and tools (molecular videos, 3D-printed models, and handouts), classroom materials, and strategies for future genetic ideas that can be distributed in formal, semiformal, and informal educational environments. This work highlights the many factors missing from the reductionist view of CCR5, including the association of missense variants and CCR5 expression with neurological phenotypes, and the role of CCR5 and its delta32 variant in complex critical care patients with sepsis. When connected to genomic stories in the news, these tools offer critically needed Ethical, Legal, and Social Implications (ELSI) education to combat biological reductionism.
  3. Objective: In response to COVID-19, the informatics community united to aggregate as much clinical data as possible to characterize this new disease and reduce its impact through collaborative analytics. The National COVID Cohort Collaborative (N3C) is now the largest publicly available HIPAA limited dataset in US history, with over 6.4 million patients, and is a testament to a partnership of over 100 organizations. Materials and Methods: We developed a pipeline for ingesting, harmonizing, and centralizing data from 56 contributing data partners using 4 federated Common Data Models. N3C data quality (DQ) review involves both automated and manual procedures. In the process, several DQ heuristics were discovered in our centralized context, both within the pipeline and during downstream project-based analysis. Feedback to the sites led to many local and centralized DQ improvements. Results: Beyond well-recognized DQ findings, we discovered 15 heuristics relating to source Common Data Model conformance, demographics, COVID tests, conditions, encounters, measurements, observations, coding completeness, and fitness for use. Of 56 sites, 37 (66%) demonstrated issues through these heuristics, and these 37 sites improved after receiving feedback. Discussion: We encountered site-to-site differences in DQ that would have been challenging to discover using federated checks alone. We have demonstrated that centralized DQ benchmarking reveals unique opportunities for DQ improvement that will support improved research analytics, both locally and in aggregate. Conclusion: By combining rapid, continual assessment of DQ with a large volume of multisite data, it is possible to support more nuanced scientific questions with the scale and rigor that they require.
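The centralized DQ heuristics described in the N3C abstract above can be illustrated with a minimal sketch: given per-site summary counts, flag sites whose data fail simple plausibility or completeness rules. The site names, counts, and thresholds here are hypothetical; the actual N3C checks are far more extensive and run over harmonized Common Data Model tables.

```python
# Minimal sketch of centralized data-quality (DQ) heuristics over
# per-site summary counts. All data and thresholds are hypothetical.
site_counts = {
    "site_A": {"patients": 120_000, "covid_tests": 95_000, "missing_sex": 200},
    "site_B": {"patients": 80_000,  "covid_tests": 0,      "missing_sex": 50},
    "site_C": {"patients": 50_000,  "covid_tests": 30_000, "missing_sex": 9_000},
}

def flag_sites(counts, max_missing_sex_frac=0.05):
    """Flag sites with no COVID test records mapped, or with sex
    missingness above a threshold fraction of patients."""
    flagged = {}
    for site, c in counts.items():
        issues = []
        if c["covid_tests"] == 0:
            issues.append("no COVID tests mapped")
        if c["missing_sex"] / c["patients"] > max_missing_sex_frac:
            issues.append("high demographic missingness")
        if issues:
            flagged[site] = issues
    return flagged

print(flag_sites(site_counts))
# -> {'site_B': ['no COVID tests mapped'],
#     'site_C': ['high demographic missingness']}
```

Running such checks centrally, over all sites' harmonized data at once, is what makes cross-site outliers visible; a purely federated check at each site would have no baseline to compare against.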